H∞ Optimality of the LMS Algorithm
Authors
Abstract
We show that the celebrated LMS (least-mean squares) adaptive algorithm is H∞ optimal. The LMS algorithm has long been regarded as an approximate solution to either a stochastic or a deterministic least-squares problem, and it essentially amounts to updating the weight vector estimates along the direction of the instantaneous gradient of a quadratic cost function. In this paper we show that LMS can be regarded as the exact solution to a minimization problem in its own right. Namely, we establish that it is a minimax filter: it minimizes the maximum energy gain from the disturbances to the predicted errors, while the closely related so-called normalized LMS algorithm minimizes the maximum energy gain from the disturbances to the filtered errors. Moreover, since these algorithms are central H∞ filters, they minimize a certain exponential cost function and are thus also risk-sensitive optimal. We discuss the various implications of these results, and show how they provide theoretical justification for the widely observed excellent robustness properties of the LMS filter.

This manuscript is submitted for publication with the understanding that the US Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon.
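The abstract describes LMS as a gradient step on the instantaneous squared error, and normalized LMS as the same step scaled by the input energy. The two updates can be sketched as follows; the function names, step sizes, and regularization constant are illustrative choices, not taken from the paper:

```python
import numpy as np

def lms_step(w, x, d, mu):
    """One LMS update: a gradient step on the instantaneous squared error.

    w  : current weight-vector estimate
    x  : input (regressor) vector
    d  : desired response
    mu : step size
    """
    e = d - w @ x            # a-priori (predicted) error
    return w + mu * e * x    # step along the instantaneous gradient

def nlms_step(w, x, d, mu, eps=1e-8):
    """Normalized LMS: the step size is scaled by the input energy.
    eps guards against division by zero for vanishing inputs."""
    e = d - w @ x
    return w + (mu / (eps + x @ x)) * e * x
```

For example, iterating either update on noiseless data generated by an unknown two-tap system drives the weight estimate to the true coefficients.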
Related papers
Frequency Estimation of Unbalanced Three-Phase Power System using a New LMS Algorithm
This paper presents a simple and easily implementable Least Mean Square (LMS) type approach for frequency estimation of a three-phase power system under unbalanced conditions. The proposed LMS-type algorithm is based on a second-order recursion for the complex voltage derived from Clarke's transformation, which is proved in the paper. The proposed algorithm is a real adaptive filter with real parameter...
H∞ Optimality Criteria for LMS and Backpropagation
We have recently shown that the widely known LMS algorithm is an H∞ optimal estimator. The H∞ criterion has been introduced, initially in the control theory literature, as a means to ensure robust performance in the face of model uncertainties and lack of statistical information on the exogenous signals. We extend here our analysis to the nonlinear setting often encountered in neural networks...
The Wavelet Transform-Domain LMS Adaptive Filter Algorithm with Variable Step-Size
The wavelet transform-domain least-mean square (WTDLMS) algorithm uses the self-orthogonalizing technique to improve the convergence performance of LMS. In the WTDLMS algorithm, the trade-off between the steady-state error and the convergence rate is governed by a fixed step-size. In this paper, the WTDLMS adaptive algorithm with variable step-size (VSS) is established. The step-size in each subf...
Improved Adaline Networks for Robust Pattern Classification
The Adaline network [1] is a classic neural architecture whose learning rule is the famous least mean squares (LMS) algorithm (a.k.a. delta rule or Widrow-Hoff rule). It has been demonstrated that the LMS algorithm is optimal in the H∞ sense, since it tolerates small (in energy) disturbances, such as measurement noise, parameter drifting and modelling errors [2,3]. Such optimality of the LMS algorit...
Genetic Algorithm for Echo Cancelling
In this paper, echo cancellation is performed using a genetic algorithm (GA). The genetic algorithm is implemented with two kinds of crossover: heuristic and microbial. A new procedure is proposed to estimate the coefficients of the adaptive filters used in echo cancellation by combining the GA with the Least-Mean-Square (LMS) method. The results are compared for various values of the LMS step size and diff...
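To make the GA side of such a hybrid concrete, here is a toy sketch of a GA searching for FIR echo-path coefficients with heuristic crossover (a child is placed on the line through both parents, on the far side of the fitter one). Everything here — population size, mutation scale, selection scheme — is an illustrative assumption; the paper's microbial crossover and its GA/LMS combination are not reproduced:

```python
import numpy as np

def heuristic_crossover(better, worse, rng):
    """Heuristic crossover: step from the fitter parent further away
    from the worse one, by a random amount in (0, 1)."""
    return better + rng.uniform(0.0, 1.0) * (better - worse)

def ga_filter(x, d, n_taps, pop=30, gens=300, sigma=0.01, seed=0):
    """Toy GA estimating FIR coefficients w minimizing mean((d - X w)^2)."""
    rng = np.random.default_rng(seed)
    # Regressor matrix: row n holds [x[n], x[n-1], ..., x[n-n_taps+1]].
    X = np.column_stack([np.roll(x, k) for k in range(n_taps)])
    X[:n_taps] = 0.0                       # drop wrapped-around samples
    mse = lambda w: float(np.mean((d - X @ w) ** 2))
    P = rng.standard_normal((pop, n_taps))
    for _ in range(gens):
        P = P[np.argsort([mse(w) for w in P])]     # fittest first (elitism)
        for i in range(pop // 2, pop):             # replace the worse half
            a, b = rng.integers(0, pop // 2, size=2)
            better, worse = sorted((P[a], P[b]), key=mse)
            child = heuristic_crossover(better, worse, rng)
            P[i] = child + sigma * rng.standard_normal(n_taps)  # mutation
    P = P[np.argsort([mse(w) for w in P])]
    return P[0]
```

In the hybrid scheme the paper describes, an LMS pass (as in the update above) would refine such GA candidates locally; here the GA alone already drives the residual echo energy down on a simple synthetic echo path.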
Journal:
Volume, Issue:
Pages: -
Published: 1995